# setup
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import scipy
import scipy.integrate
# setup: has to be in a separate cell for some reason
plt.rcParams['figure.figsize'] = [10, 5]
# from last time
def run_sim(N0, gen_fn, ngens, dtype='int', **kwargs):
    """
    Runs some simulations, and returns the results in the columns of an array.
    N0: initial conditions for the simulations
    gen_fn: the function that takes the current array of population sizes and some more arguments,
        and returns the array of population sizes in the next time step
    ngens: number of time steps to run the simulations for
    dtype: the numeric type of the result (defaults to integers)
    **kwargs: more arguments passed to `gen_fn()`
    returns: A (ngens, len(N0)) array of type `dtype`.
    """
    N = np.empty((ngens, len(N0)), dtype=dtype)
    N[0, :] = N0
    for t in range(1, ngens):
        N[t, :] = gen_fn(N[t-1, :], **kwargs)
    return N
# demonstration of using *args and **kwargs
def f(a=0, b=0):
    return a + b
print(f(2, 3))
args = [2,3]
print("with *args:", f(*args))
print("f(4) = ", f(4))
kwargs = {'b': 7, 'a': 10}
f(**kwargs)
5
with *args: 5
f(4) =  4
17
Today we'll look at differential equation models, which are continuous-time, deterministic descriptions of the population dynamics. Both of these are approximations, at least at some level, but they provide a very useful big-picture look at how a model "ought to behave".
Last time we studied a model of logistic population growth. Reparameterizing a bit, in this model the population size at time step $k$, denoted $N_k$, satisfies $$\begin{aligned} N_{k+1} - N_k &= N_k \left( \lambda - \mu - \frac{\lambda N_k}{K} \right) + \epsilon, \end{aligned}$$ where $\lambda$ and $\mu$ are the per-capita birth and death rates, $K$ sets the strength of density dependence, and $\epsilon$ is random noise.
Imagine our first model was very coarse-grained: we'd defined one "time step" to be 100 years, and now we want to refine it, using the "same" model, but with one time step equal to one year. To do this, we'd need to divide the per-capita birth and death rates, $\lambda$ and $\mu$, by 100, so that the expected numbers of births and deaths in each (shorter) time step are 100 times smaller.
The carrying capacity would stay the same, and the variance of $\epsilon$ would decrease along with the number of births and deaths per time step.
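To see why the carrying capacity stays the same, set $\epsilon = 0$ and look for the equilibrium of the deterministic update: the population stops changing when $$\begin{aligned} \lambda - \mu - \frac{\lambda N^*}{K} &= 0, \qquad \text{i.e.,} \qquad N^* = K \, \frac{\lambda - \mu}{\lambda}, \end{aligned}$$ and dividing both $\lambda$ and $\mu$ by the same factor (here, 100) leaves $N^*$ unchanged.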
Let's say instead that one original time step is one "unit of time", which we now break up into $T$ time steps. Then, the equation above becomes: $$\begin{aligned} N_{t+1/T} - N_t &= \frac{1}{T} N_t \left( \lambda - \mu - \frac{\lambda N_t}{K} \right) + \epsilon . \end{aligned}$$
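Here's a minimal sketch of what a finer-grained stochastic simulation could look like, reusing `run_sim()` from above. The helper name `logistic_gen_fine` and the choice of Poisson births plus Binomial survival are illustrative assumptions; the key point is just that the per-capita rates get scaled down by $T$.
def logistic_gen_fine(N, lam, mu, K, T):
    # one sub-step of length 1/T: birth and death rates scaled down by T
    births = np.random.poisson(lam / T * N * np.fmax(0.0, 1 - N / K), len(N))
    survivors = np.random.binomial(N, 1 - mu / T, len(N))
    return births + survivors
# for example, refining 100 units of time into T = 10 sub-steps each:
# N_fine = run_sim(np.array([1000]), gen_fn=logistic_gen_fine, ngens=100 * 10,
#                  lam=0.1, mu=0.05, K=2000, T=10)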
The first idea of how this should behave comes from ignoring the noise (setting $\epsilon = 0$). Then, multiplying by $T$ and taking $T \to \infty$ (the limit of small time steps), we end up with the logistic equation, $$\begin{aligned} \frac{d}{dt} N_t &= N_t \left( \lambda - \mu - \frac{\lambda N_t}{K} \right). \end{aligned}$$ (Well, it's the logistic equation we'd find on Wikipedia if we set $r = \lambda - \mu$ and changed $K$ to $K r / \lambda$.)
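Concretely, writing $r = \lambda - \mu$, the right-hand side can be rearranged as $$\begin{aligned} N_t \left( \lambda - \mu - \frac{\lambda N_t}{K} \right) &= r N_t \left( 1 - \frac{N_t}{K r / \lambda} \right), \end{aligned}$$ which is the textbook form $r N (1 - N / K')$ with carrying capacity $K' = K r / \lambda$, the same value as the equilibrium $N^*$ above.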
Solving differential equations can be done with `scipy.integrate`.
How does this differ from iterating the discrete, deterministic equation? Not much, in this case.
lam = 0.1
mu = 1 - 0.95
K = 2000
# right-hand side of the logistic ODE: dN/dt = N * (lam - mu - lam * N / K)
def logistic_eqn(t, y):
    return y * (lam - mu - lam * y / K)
y0 = np.array([0.6, 0.8, 1.0, 1.2, 1.4]) * K / 2
logistic_solns = scipy.integrate.solve_ivp(
    fun=logistic_eqn,
    t_span=(0.0, 120.0),
    y0=y0,
    t_eval=range(121))
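The object returned by `solve_ivp()` carries the evaluation times in `.t` and the solution values in `.y`, with one row per component of `y0`; that's why we transpose (`logistic_solns.y.T`) when plotting below.
print(logistic_solns.t.shape, logistic_solns.y.shape)  # (121,) and (5, 121)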
# stochastic model from last time: Poisson number of births plus Binomial survival
def logistic_gen(N, lam, mu, K):
    return (np.random.poisson(lam * N * np.fmax(0.0, 1 - N / K), len(N))
            + np.random.binomial(N, 1 - mu, len(N)))
N = run_sim(y0,
            gen_fn=logistic_gen, ngens=120,
            lam=lam, mu=mu, K=K)
plt.plot(N)
plt.plot(logistic_solns.y.T, linestyle='--')
plt.xlabel("time")
plt.ylabel("pop size")
plt.title("Logistic: theory and simulation")
plt.show()
Compare the solution of the differential equation to iterating the deterministic equation we had before:
# one step of the deterministic recursion: N + N * (lam - mu - lam * N / K)
def logistic_step(N, lam, mu, K):
    return N + (lam - mu) * N * (1 - lam * N / (K * (lam - mu)))
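Here's a minimal sketch of that comparison, reusing `run_sim()` with `dtype='float'` (the variable names are the ones defined above):
N_det = run_sim(y0, gen_fn=logistic_step, ngens=121, dtype='float',
                lam=lam, mu=mu, K=K)
plt.plot(N_det)
plt.plot(logistic_solns.y.T, linestyle='--')
plt.xlabel("time")
plt.ylabel("pop size")
plt.title("Logistic: discrete iteration vs differential equation")
plt.show()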